Patent abstract:
Placement of a region of interest for quantitative ultrasound imaging. A method for placing a region of interest (ROI) in quantitative ultrasound imaging: an anatomical landmark is detected (10) in an ultrasound image; signal processing (11) is carried out; a position of an ROI in a field of view of the ultrasound image is determined (14), the position of the ROI being determined based on the anatomical landmark and on the signal processing (11); shear wave imaging is performed; and an image for shear wave imaging is produced (18). Figure for the abstract: Fig.
Publication number: FR3078250A1
Application number: FR1901708
Filing date: 2019-02-20
Publication date: 2019-08-30
Inventor: Yassin Labyed
Applicant: Siemens Medical Solutions USA Inc;
IPC main classification:
Patent description:

The present invention relates to quantitative ultrasound imaging. In quantitative ultrasound imaging, the detected information is further processed to quantify a characteristic of the tissue that has been imaged. Rather than simply providing a B-mode image of the tissue, an image of a characteristic of that tissue is provided. For example, a shear wave velocity in tissue is calculated using ultrasound imaging. Other examples include strain, attenuation or backscatter measurements.
[0003] For quantitative ultrasound imaging, a user typically positions a region of interest (ROI) in a B-mode image. The ROI defines the region for which the quantification is carried out. To avoid the processing delays or complications of quantifying the full field of view (FOV) of the B-mode image, the ROI, positioned by the user, defines the region of the tissue for quantification. This manual positioning of the ROI interrupts the workflow and increases scan times. In addition, operator dependence and suboptimal ROI positioning can result in poor image quality and non-reproducible results. Many users do not position the ROI at the correct location, especially where the location may be specific to the type of quantification and application. ROI sizing errors can also result in poor image quality and/or non-reproducible results.
Summary [0005] By way of introduction, the preferred embodiments described below include methods and systems for placing an ROI in quantitative ultrasound imaging by an ultrasound scanner. The ROI is placed automatically using quantification-specific anatomy detection, signal processing for clutter, attenuation or noise, and/or identification of fluid regions. Multiple ROIs can be positioned automatically for quantification. Automatic placement can improve the consistency of measurements over time and between sonographers and can provide better image quality with less influence from unwanted signals. As a result, diagnosis and/or treatment may be improved.
According to a first aspect, there is provided a method of placing a region of interest (ROI) in quantitative ultrasound imaging, by an ultrasound scanner, characterized in that:
an anatomical landmark is detected in an ultrasound image;
signal processing of in-phase and quadrature or radio-frequency ultrasonic signals is carried out;
the position of an ROI in a field of view of the ultrasound image is determined by the ultrasound scanner, the position of the ROI being determined on the basis of the anatomical landmark and of the signal processing;
shear wave imaging is performed by the ultrasound scanner at the ROI position and an image for shear wave imaging is produced.
Preferably:
- the detection comprises detecting by a machine-learned network or detecting by image processing, and in which determining the position comprises determining by a machine-learned network,
- the detection comprises detecting a liver capsule and in which the determination comprises determining the position of the ROI based on a location of the liver capsule,
- signal processing comprises measuring clutter in the ultrasonic signals and in which the determination comprises determining the position away from the locations of the signals having clutter,
- signal processing comprises measuring attenuation in the ultrasonic signals and in which the determination comprises determining a depth of the position of the ROI on the basis of the attenuation,
- the method includes the determination by the ultrasound scanner of a size and a shape of the ROI at the position,
the method further comprises identifying, by the ultrasound scanner, locations of fluid and in which the determination of the position comprises determining the position of the ROI, so as not to include the locations of fluid,
the determination of the position includes determining, by the ultrasound scanner, the position of the ROI and another position of another ROI and in which the production of the image includes producing the image annotated with a relative measurement between the ROI and the other ROI.
According to a second aspect, there is provided a method of placing a region of interest (ROI) in quantitative ultrasound imaging, by an ultrasound scanner, characterized in that:
the location of a liver capsule in an ultrasound image is detected by the ultrasound scanner;
the position of an ROI in a field of view of the ultrasound image is determined by the ultrasound scanner, the position of the ROI being determined on the basis of the location of the liver capsule;
shear wave imaging is performed by the ultrasound scanner at the ROI position and an image is produced for shear wave imaging.
Preferably:
- the determination of the position includes determining the position at a minimum distance in depth from the location of the liver capsule along a line which is perpendicular to an edge of the liver capsule,
- the method comprises signal processing of in-phase and quadrature or radio-frequency ultrasonic signals, the signal processing measuring clutter or attenuation, and in which the determination of the position comprises determining the position on the basis of the location of the clutter or attenuation.
According to a third aspect, there is provided a system for placing a region of interest (ROI) in quantitative ultrasound imaging, characterized in that it comprises:
transmit and receive beamformers, connected to a transducer, configured to scan by ultrasound in a B mode and in a quantitative mode;
an image processor, configured to locate an ROI in a B-mode field of view, based on B-mode scan data, to cause the transmit and receive beamformers to scan in the quantitative mode for the localized ROI, and to produce an image from the scan in quantitative mode;
a display configured to display the scan image in quantitative mode. Preferably:
- the quantitative mode includes acoustic radiation force imaging and the image processor is configured to locate the ROI on the basis of an anatomical landmark represented in the scan data in mode B,
- the image processor is configured to locate the ROI on the basis of attenuation or clutter determined from in-phase and quadrature or radio-frequency ultrasonic signals and based on the B-mode scan data after B-mode detection.
The above should not be considered as limiting the invention. Other aspects and advantages of the invention are described below, in conjunction with the preferred embodiments.
Brief description of the drawings [0014] The elements and figures are not necessarily to scale, the emphasis being rather on illustrating the principles of the invention. Furthermore, in the figures, the same references designate corresponding parts in different views.
Figure 1 is a schematic flow diagram of an embodiment of a method for placing an ROI in quantitative ultrasound imaging by an ultrasound scanner;
Figure 2 is an example B-mode image in which an ROI is positioned for shear wave imaging of a liver; and [0017] Figure 3 is a block diagram of an embodiment of a system for placing an ROI in quantitative ultrasound imaging by an ultrasound scanner.
Detailed description of the drawings and of the presently preferred embodiments. Automatic placement of an ROI is provided in shear wave or other quantitative imaging. Signal processing, image processing and/or application of a machine-learned network automatically positions, sizes and/or shapes an ROI, such as for shear wave speed imaging or other ultrasound imaging based on an acoustic radiation force pulse. For example, in shear wave imaging, the ROI is positioned to obtain a single ROI-based shear wave velocity measurement or for real-time spatial imaging of shear wave speed over a two-dimensional or three-dimensional region.
FIG. 1 represents an embodiment of a method for placing an ROI in quantitative ultrasound imaging by an ultrasound scanner. In general, an object of interest at a distance from the location of the ROI, signal processing of in-phase and quadrature (I/Q) or radio-frequency (RF) data, and/or locations to be avoided (for example, fluid locations) are used to automatically place an ROI for quantitative ultrasound imaging.
The method is carried out by the system shown in Figure 3 or by a different system. For example, a medical diagnostic ultrasound imaging system detects, processes and/or identifies signals in acts 10, 11 and 12, an image processor makes a determination in act 14, and the imaging system performs quantitative imaging and produces an image in acts 16 and 18. Other devices can perform any of these acts, such as the image processor performing all non-scanning acts.
The acts are performed in the order indicated or in a different order. For example, acts 10, 11 and 12 are performed in any order or simultaneously.
We can use additional acts, different acts or a smaller number of acts. For example, one or two of acts 10, 11 and 12 are not performed. As another example, act 18 is not performed, when the output is a quantification to be stored in a patient record or in a report.
To locate the ROI for quantitative imaging, ultrasound data representing or corresponding to a patient is acquired. An ultrasound imaging system or scanner scans the patient. Alternatively, data from a previous scan is acquired by the scanner, such as by transfer from a memory or an image archive over a communication system.
This scan is an initial scan, such as a first scan or a subsequent scan once quantitative imagery is to be used. For example, the scan is repeated while a sonographer positions the transducer to scan the desired region of the patient. The FOV for scanning is positioned on the organ or organs that are of interest. Once the object of interest is in the FOV, the ultrasound data to be used to locate the ROI is available from the scan or is acquired by another scan.
The scanning for the ultrasonic data, to locate the ROI, is over the entire FOV. The lateral or azimuthal extent and the depth of the scan define the FOV. Based on different settings, different dimensions of the FOV can be obtained. The user or system determines the FOV.
To scan a FOV by ultrasound, transmit and receive beams are formed by an ultrasound system. Any scan format can be used, such as sector, linear or Vector (trademark), and corresponding FOVs. Scan lines are distributed by electrical and/or mechanical steering, in one, two or three dimensions, which gives data representing a line, an area or a volume.
The characteristics of the transmit and/or receive beams can be adjusted or correspond to parameter values. The depth and/or lateral extent of the FOV is adjusted. Likewise, the transmit focal depth, the transmit frequency, the receive frequency, the line density, the sampling density (sampling distance along a scan line), the transmit waveform shape (e.g., number of cycles and/or envelope shape), the frame rate, the aperture and/or other scanning characteristics may be adjusted. The number of transmit focal positions per scan line (for example, one or two) can be adjusted. Different, additional or fewer scan parameters (for example, transmit and/or receive) can be used.
By forming receive beams, the response data represents samples in the FOV. The data received from the scan is detected. A B-mode detector determines the intensity of the acoustic echoes represented by the received data. For example, the received signals before detection are formatted as in-phase and quadrature (I/Q) data, but RF data can be used. A square root of the sum of the squares of the in-phase and quadrature terms is calculated as the detected intensity. Other measurements of the magnitude of the acoustic echo can be used for B-mode detection.
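By way of illustration of this detection step, a minimal Python sketch of envelope detection followed by log compression; the array shapes and the 60 dB display range are assumed values, not taken from this disclosure:

```python
import numpy as np

def bmode_detect(iq, dynamic_range_db=60.0):
    """Detect B-mode intensity from beamformed I/Q samples.

    iq: complex array of shape (num_depth_samples, num_scan_lines),
        I in the real part and Q in the imaginary part.
    Returns log-compressed values mapped to 0..255 gray levels.
    """
    # Envelope: square root of the sum of squares of I and Q.
    envelope = np.sqrt(iq.real**2 + iq.imag**2)

    # Log compression into the display dynamic range.
    eps = np.finfo(float).eps
    db = 20.0 * np.log10(envelope + eps)
    db -= db.max()                      # 0 dB at the strongest echo
    db = np.clip(db, -dynamic_range_db, 0.0)

    # Map to an 8-bit gray scale for display.
    gray = np.uint8(255 * (db + dynamic_range_db) / dynamic_range_db)
    return gray

# Example with synthetic data: 512 depth samples x 128 scan lines.
iq = np.random.randn(512, 128) + 1j * np.random.randn(512, 128)
bimg = bmode_detect(iq)
```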
Other processing can be carried out in B mode. For example, the detected B-mode data can be spatially filtered. As another example, a sequence of frames is acquired from a corresponding scan sequence of the entire FOV. Different pairs or other sized groupings of the B-mode data frames are temporally filtered.
In other embodiments, other types of detection and corresponding scanning are carried out. For example, color flow estimation (for example, Doppler) is used. The velocity, power and/or variance are estimated. As another example, a harmonic mode is used, such as imaging at a second harmonic of the fundamental transmit frequency. Combinations of modes can be used.
After processing, the detected data is scan converted, if necessary. A two-dimensional image can be produced. The B-mode image represents the intensity or strength of return of the acoustic echoes in the B-mode FOV. Figure 2 shows an example of a B-mode image of a patient's liver. The B-mode intensities or data are mapped to a gray scale within the dynamic range of the display. The gray scale can be equal to or similar to red, green, blue (RGB) values used by the display to control pixels. Any color or grayscale mapping can be used.
The data used for the other acts comes from any point in the processing path. In one embodiment, detected and scan-converted scalar values are used, before any color or display mapping. In other embodiments, beamformed samples (e.g., I/Q or RF signals) before detection, detected data before scan conversion, or display values after display mapping are used. The data is in the polar coordinate system used for scanning or is interpolated to a regular grid, such as a Cartesian coordinate system.
During live or real-time imaging (scanning and image output at the same time or while the transducer is placed on the patient), no special interaction is generally required or expected from the user. The user can choose an application (for example, quantitative ultrasound imaging, such as shear wave speed), position the FOV and activate the quantitative imaging, then the remaining configuration occurs automatically, using the initial or FOV scans before separate ROI scans for quantitative imaging. The scanning is configured to cease scans of the patient's FOV while scanning the ROI for quantification. Alternatively, B-mode imaging and quantitative imaging are interleaved.
In act 10, the ultrasound scanner, using an image processor, detects one or more anatomical landmarks in an ultrasound image or other ultrasound data (for example, B-mode data). The object is detected from the initial or subsequent scan data. The data representing the patient is image processed to detect the object. Detection is automatic during live imaging. Rather than requiring user input of a location or object location, the processor applies filtering, edge detection, pattern matching, template matching or other computer-assisted classification to detect the object in the data. Any image processing can be used. The processor detects a location or locations without user input.
In one embodiment, a machine-learned network is applied by the image processor (for example, central processing unit or graphics processing unit). Haar, gradient, directional, steerable, deep-learned or other features are calculated from the ultrasound data and input to the machine-learned network. The machine-learned network indicates, on the basis of training data having a known ground truth distinguishing the object from other tissue, fluid or devices, whether the object is represented by the data and where it is.
Any machine learning can be used, such as a probabilistic boosting tree, a Bayesian network, a neural network, deep learning or a support vector machine. Any feature set can be used. In one embodiment, probabilistic boosting trees with marginal space learning train a classifier based on Haar and steerable features. In another embodiment, a random forest regression is used for training. In yet another embodiment, deep learning is used to define the features and learn how to relate the features to object detection.
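A minimal sketch of applying a trained classifier for landmark detection, assuming a hypothetical previously trained model with a predict_proba() interface (for example, a boosting or random forest classifier) and simple intensity/gradient features standing in for Haar or steerable features:

```python
import numpy as np

def detect_landmark(bmode, classifier, win=32, step=16):
    """Sliding-window landmark detection on a B-mode image.

    bmode: 2-D array of B-mode gray levels.
    classifier: hypothetical pre-trained model with predict_proba().
    Returns the (row, col) center of the highest-scoring window.
    """
    best_score, best_center = -np.inf, None
    rows, cols = bmode.shape
    for r in range(0, rows - win, step):
        for c in range(0, cols - win, step):
            patch = bmode[r:r + win, c:c + win].astype(float)
            gy, gx = np.gradient(patch)   # crude stand-in features
            feats = np.array([[patch.mean(), patch.std(),
                               np.abs(gx).mean(), np.abs(gy).mean()]])
            score = classifier.predict_proba(feats)[0, 1]
            if score > best_score:
                best_score = score
                best_center = (r + win // 2, c + win // 2)
    return best_center, best_score
```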
The object to be located is any object, such as an anatomy or a device. For example, a valve is located. Parts of a liver or other anatomical locations or lesions can be located. In other embodiments, devices, such as surgical instruments or implants (e.g., a catheter, tool handle, needle or surgical device, such as a prosthetic ring or valve), are detected instead of an anatomy. Both anatomy and added devices can be detected. Different detectors or the same detector detect different anatomy and/or devices. The object is any anatomical region, catheter (for example, lasso) or tool that a machine-learned or other detector detects.
In an example of shear wave imaging, the object is a liver capsule. The location of the liver capsule is detected in an ultrasound image. FIG. 2 represents the location (for example, arrow pointing to the liver capsule 20) of the liver capsule 20 in a B-mode image. Other anatomical features can be detected, such as the edge of the liver capsule 20.
The detected anatomy or device has any spatial extent. For example, the anatomy extends along several pixels, in one or several directions. The anatomy has any shape, such as a gently varying curved shape. Jagged or flat parts may occur. A device can have a smooth surface. Detection provides a location of the object (for example, landmark) of interest. Features, surfaces and/or interior parts of the object can be found. Features represented in the data but not belonging to the object can be used to locate the object.
In act 11, the ultrasound scanner, using the receive beamformer or an image processor, performs signal processing on the ultrasonic signals. The I/Q or RF data produced by the receive beamformer is processed before B-mode detection, Doppler estimation or other detection.
Signal processing allows detection of characteristics reflected in relative phasing or other signal content between locations. For example, the amount of clutter, attenuation, backscatter and/or noise is different for different locations. In one embodiment, the level of clutter in the ultrasonic signals is measured. The I/Q or RF data is correlated across different scan lines. The level of correlation (for example, the correlation coefficient) indicates the amount of clutter. Data that is not well correlated with neighboring scan lines may be subject to clutter. Clutter can interfere with quantification. Any clutter measure known now or later developed can be used.
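As a hedged illustration of one such clutter measure, a sketch computing the normalized correlation between each scan line and its neighbor from beamformed I/Q data; the 0.7 threshold is an assumed value:

```python
import numpy as np

def clutter_map(iq, corr_threshold=0.7):
    """Flag scan lines with likely clutter from beamformed I/Q data.

    iq: complex array (num_depth_samples, num_scan_lines).
    A line poorly correlated with its neighbor is flagged as cluttered.
    """
    n_lines = iq.shape[1]
    corr = np.ones(n_lines)
    for k in range(n_lines - 1):
        a, b = iq[:, k], iq[:, k + 1]
        num = np.abs(np.vdot(a, b))      # |conj(a) . b|
        den = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
        corr[k] = num / den
    corr[-1] = corr[-2]                  # pad the last line
    return corr < corr_threshold         # True where clutter is likely
```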
In another embodiment, the attenuation is measured. The decay rate is measured as a function of depth for various locations, after accounting for depth gain and/or system effects. The decrease in signal amplitude with depth or distance from the transducer may be different for different tissues and/or different scan settings. A greater signal decay rate as a function of depth (i.e., attenuation) corresponds to less signal or a lower signal-to-noise ratio. The quantification may be less precise where the attenuation is greater. Any attenuation measure known now or later developed can be used.
In act 12, the ultrasound scanner identifies one or more locations of fluid. Doppler scanning is performed. Due to filtering, Doppler estimates of velocity, energy and/or variance indicate a fluid response to the ultrasound. Alternatively, image processing as in act 10 is performed to identify fluid surrounding a tissue structure, or fluid that may appear as dark or weak return in B mode.
Grouping, image processing (for example, edge detection or thresholding), a machine-learned classifier or another technique is applied to the Doppler estimates to identify regions of fluid. Low-pass filtering can be applied to eliminate small fluid response profiles. An area or volume threshold can be applied to eliminate single or small groups of locations, leaving larger locations of fluid. The fluid regions correspond to blood, cysts or other fluids. FIG. 2 represents certain fluid regions 26 corresponding to vessels in the liver. Cysts include sufficient fluid content to respond as a fluid to ultrasound. Due to the difference in response to ultrasound of fluid as compared to tissue, quantification may not be performed correctly in fluid regions. Measuring shear waves in a cyst, for example, would be imprecise.
Likewise, bone, medical devices or other solid objects can be identified relative to soft tissue. Thresholding or other image processing of the B-mode data can indicate bone locations. After filtering or other grouping, bone locations are identified. Bone may have a different acoustic response and may therefore interfere with quantification.
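A minimal sketch of the fluid identification of act 12, assuming a Doppler energy (power) map; the energy threshold and minimum-area value are assumptions and depend on the system:

```python
import numpy as np
from scipy import ndimage

def fluid_mask(doppler_energy, energy_threshold, min_area=50):
    """Identify fluid regions from a Doppler energy map.

    doppler_energy: 2-D array of Doppler power estimates.
    energy_threshold: power above which a sample is treated as fluid.
    min_area: smallest connected group of samples kept, discarding
              speckle-sized responses.
    """
    raw = doppler_energy > energy_threshold
    labels, n = ndimage.label(raw)            # connected grouping
    if n == 0:
        return raw
    areas = np.bincount(labels.ravel())
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = areas[1:] >= min_area          # drop small groups
    return keep[labels]
```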
In act 14, the ultrasound scanner, for example using an image processor or a control unit, determines a position of an ROI in the FOV of the ultrasound image. In one embodiment, a machine-learned network is applied. Any machine learning can be used, as mentioned above. The machine-learned network relates input features, such as the location of a landmark, clutter levels by location and/or fluid locations, to placement of the ROI. ROIs placed by an expert are used as the ground truth for training. The machine learns to use the designated inputs (for example, information from acts 10, 11 and/or 12) to place the ROI on the basis of the ground truth. In other embodiments, the B-mode image is also input, or input instead of the landmark and/or fluid locations. The machine-learned network learns to place the ROI given the input image and the clutter, attenuation or other signal processing outputs. Application of the machine-learned network outputs a position for the ROI. Alternatively, multiple possible positions are output and rules are used to select the position of the ROI, such as based on avoiding clutter or fluid.
In an alternative embodiment, the determination uses rules. For example, the ROI is placed in position relative to a landmark, but at a distance therefrom, while also avoiding clutter and fluid. The rules can indicate a precise orientation and distance to the landmark, with orientation and distance tolerances to account for avoiding clutter and fluid. Fuzzy logic can be used.
The position is determined from the information collected in one or more of acts 10, 11 and/or 12. The landmark or landmarks, attenuation, clutter and/or fluid locations are used to position the ROI. The ROI defines a scan region for quantification. Locations outside the ROI but in the FOV are not used or are used with a lower density than in the ROI.
In the example of Figure 2, a shear wave speed is measured in the liver. The ROI 24 is placed in position relative to the liver capsule 20. The edge of the liver capsule 20 is a landmark. A line 22 is defined perpendicular to the edge. Various lines are possible. A line is passed through the center of the liver capsule 20 or centered in the FOV. The ROI 24 must be, along line 22, a minimum distance from the edge of the liver capsule 20. For example, the ROI 24 should be at least 2 cm from the edge of the liver capsule 20. Rather than detecting an object or landmark on which the ROI 24 must be located, the landmark is used to place the ROI 24 at a location remote from the landmark. In alternative embodiments, the ROI 24 is placed on the object or so that it includes the object.
The ROI 24 can be put in position based on the results of the signal processing and/or the fluid identification. Locations having relatively high clutter, locations having relatively high attenuation and/or fluid locations are used to place the ROI 24. The ROI 24 can be placed to avoid these locations. For example, the ROI 24 has various possible locations distributed across the FOV and assigned default priorities. Using the priority list, each possible ROI location is tested until an ROI location is found that avoids clutter and fluid. Alternatively, the ROI 24 is placed in position so as to include one or more of these locations.
For shear wave imaging of the liver, the ROI 24 can be put in position on the basis of the liver capsule 20 while avoiding fluid 26 and relatively high clutter. The ROI 24 has various possible locations given the rules relating the ROI 24 to the edge of the liver capsule 20. Different lines and/or depths, given the tolerances, provide the possible locations. Each location is checked for avoidance of fluid and of relatively high clutter. The ROI 24 is put in position at the location with the lowest clutter and no fluid. Priority can be used for the possible locations. Alternatively, a cost function is used, such as by weighting possible locations farther from the FOV center as more expensive.
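A hedged sketch of such a rule-based placement, stepping candidate ROI centers along the perpendicular line from the capsule edge (at least 2 cm deep) and rejecting candidates overlapping fluid or clutter; the edge point, normal vector, search span and ROI size used here are illustration assumptions:

```python
import numpy as np

def place_roi(edge_point_mm, inward_normal, fluid, clutter,
              mm_per_px, min_dist_mm=20.0, roi_mm=(10.0, 5.0)):
    """Rule-based ROI placement below the liver capsule.

    edge_point_mm: (depth, lateral) point on the capsule edge, in mm.
    inward_normal: unit vector perpendicular to the edge, into the liver.
    fluid, clutter: boolean maps (depth x lateral) of locations to avoid.
    Candidate centers start min_dist_mm (2 cm) from the edge and step
    deeper until a clean position is found.
    """
    edge = np.asarray(edge_point_mm, float)
    n = np.asarray(inward_normal, float)
    n /= np.linalg.norm(n)
    half = np.array(roi_mm) / 2.0
    for dist in np.arange(min_dist_mm, min_dist_mm + 40.0, 2.0):
        center = edge + dist * n
        lo = np.round((center - half) / mm_per_px).astype(int)
        hi = np.round((center + half) / mm_per_px).astype(int)
        box = (slice(max(lo[0], 0), hi[0]), slice(max(lo[1], 0), hi[1]))
        if fluid[box].any() or clutter[box].any():
            continue                     # avoid fluid and clutter
        return center                    # first acceptable candidate
    return None                          # no clean position found
```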
In another embodiment, muscle fiber tissue is detected as a landmark. The direction of the muscle fibers is used to put the ROI 24 in position. The ROI 24 is placed in position so that the scan for quantitative imaging is along the muscle fibers rather than across them.
Certain information can be used to positively place the ROI 24 rather than to indicate what should be avoided. The liver capsule 20 is used for positive placement in the examples above. Attenuation may indicate a placement depth. For example, the attenuation can be relatively uniform in the tissue of interest. The attenuation level indicates the depth. Where there is greater attenuation in the tissue, the depth is shallower, so that more signal reaches the ROI 24. Where there is smaller attenuation in the tissue, the depth may be greater while still meeting the distance to the landmark.
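A minimal sketch of this attenuation-based depth selection: attenuation is estimated as the decay rate of the log envelope with depth, then mapped to an ROI depth (more attenuation, shallower ROI). The 0.5-2.0 dB/cm span and the 2-7 cm depth range are assumed illustration values:

```python
import numpy as np

def attenuation_db_per_cm(envelope, depths_cm):
    """Estimate attenuation as the decay rate of the log envelope with
    depth (after depth-gain and system effects are accounted for)."""
    mean_db = 20.0 * np.log10(envelope.mean(axis=1) + 1e-12)
    slope, _ = np.polyfit(depths_cm, mean_db, 1)   # dB per cm
    return -slope                                  # positive = decay

def roi_depth_cm(atten_db_per_cm, min_depth_cm=2.0, max_depth_cm=7.0):
    """Map attenuation to an ROI depth: higher attenuation, shallower ROI."""
    frac = np.clip((atten_db_per_cm - 0.5) / 1.5, 0.0, 1.0)
    return max_depth_cm - frac * (max_depth_cm - min_depth_cm)
```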
The ultrasound scanner identifies the ROI as the region to be scanned. The region to be scanned is shaped based on the distribution of the scan lines. For linear scans, the scan lines are parallel. The resulting scan region is a square or rectangular box. For sector or Vector scans, the scan lines diverge from a point on the face of the transducer or from a virtual point behind the transducer, respectively. The sector and Vector scan formats scan a fan-shaped region. The Vector scan region may be a fan-shaped region that does not include the point of origin, so that it looks like a trapezoid (for example, a truncated triangle). Other shapes of scan region can be used.
The ROI 24 has a default shape and orientation. In one embodiment, the ultrasound scanner determines a shape of the ROI at the determined position. Any shape can be used. The shape can be determined to avoid fluid, clutter and/or a landmark. For example, the position may place the ROI near fluid locations. The shape of the ROI can be changed to avoid the fluid locations. Rather than a rectangle, a square or a rectangle with cut-out parts is used. Alternatively or additionally, the shape can be determined to include locations.
The orientation can also be determined so that it includes or avoids certain locations. The orientation can be based on steering limits from the transducer, on detected landmarks, which can cause acoustic shadowing, and/or on a directional response of the tissue being quantified.
The ROI 24 has a default size. The region can be any size, such as 5 mm laterally and 10 mm axially. In one embodiment, the ultrasound scanner determines a size of the ROI 24 at the position. The ROI 24 is sized to avoid fluid locations or relatively high clutter. Alternatively, the ROI 24 is sized to include locations of relatively high signal or backscatter (for example, fairly low clutter and fairly low noise).
The quantification scan can be affected by the size of the ROI 24. For shear wave imaging and other quantification scans, the quantification is based on a repetitive scan of the ROI 24. By giving the ROI a smaller size, the scanning rate can be increased, which makes the quantification less susceptible to motion artifact. By giving the ROI a larger size, a more representative sampling for quantification can be obtained. The ROI is sized as appropriate for the type of quantification. Various sizes can be selected based on priority and on avoiding locations that can contribute to inaccuracies or artifacts.
The ROI 24 defining the scan region for quantitative imaging is smaller than the entire FOV of the B-mode image. FIG. 2 represents a rectangular ROI 24, which covers less than 10% of the area of the FOV of the B-mode image.
The ultrasound scanner can automatically determine positions, sizes, shapes and/or orientations for one, two or more ROIs 24. The measurements of each ROI 24 can be compared to facilitate diagnosis or treatment. The scan data for each can be combined to provide quantification, such as using one ROI 24 as a baseline and quantifying based on a difference in measurement between two ROIs 24.
A relative shear wave speed may indicate, for example, that the liver tissue is fatty. By positioning one ROI 24 in the liver and another ROI 24 in the kidney, a shear wave speed ratio between the two ROIs 24 indicates whether the liver is fatty. The ratio is close to 1.0 if the liver is not fatty. As another example, acoustic radiation force impulse (ARFI) imaging is used to measure stiffness or elasticity of the tissue. The relative stiffness or elasticity between multiple ROIs 24 may indicate regions that require further study. As another example, a relative strain (for example, a strain ratio) may indicate locations of interest for diagnosis. In yet another example, ratios of maximum tissue displacement in response to ARFI transmissions may be useful. A hepato-renal echogenicity ratio can be calculated using multiple ROIs 24.
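A small numeric illustration of such a relative measurement between two ROIs; the example speeds and the 0.2 tolerance band around a ratio of 1.0 are assumed illustration values, not clinical thresholds:

```python
def hepato_renal_ratio(liver_sws_m_per_s, kidney_sws_m_per_s,
                       tolerance=0.2):
    """Relative measurement between two ROIs.

    Returns the liver/kidney shear wave speed ratio and a coarse flag;
    a ratio near 1.0 is read as unremarkable.
    """
    ratio = liver_sws_m_per_s / kidney_sws_m_per_s
    unremarkable = abs(ratio - 1.0) <= tolerance
    return ratio, unremarkable

# Example: one ROI in the liver, another ROI in the kidney.
print(hepato_renal_ratio(2.6, 2.5))   # ratio of about 1.04, flagged unremarkable
```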
In act 16, the ultrasound scanner performs quantitative imaging. The ROI 24 or ROIs 24, as determined, define the scan locations for quantitative imaging. For example, shear wave imaging is performed by the ultrasound scanner by scanning at the position of the ROI 24. Shear wave imaging can be used to quantify information useful for diagnosis, such as the speed of the shear wave in the tissue, the Young's modulus or a viscoelastic property. Shear wave imaging is a type of ARFI imaging where the ARFI is used to produce the shear wave, but other sources of stress and/or other types of ARFI imaging can be used (e.g., elasticity). Other types of quantitative imaging can be used, such as strain, elasticity, backscatter or attenuation.
For shear wave speed imaging, the ultrasound scanner measures shear wave velocities at different locations in the patient's tissue. Velocities are measured based on a distance traveled and a travel time of the shear wave propagating from an origin to the different locations of the ROI. Shear wave speed imaging is performed with separate values of shear wave speed measured for different locations, or by combining them to provide a speed for the ROI.
Shear wave velocities are based on tissue displacements. The ultrasound system acquires tissue displacements as a function of time (for example, displacement profiles) for each location in the ROI, but tissue displacement as a function of location for each different time may be used instead. An ARFI (for example, a push pulse or acoustic radiation force impulse) or another source of stress produces a shear wave in or near the tissue in the ROI. As the shear wave travels through the tissue in the ROI, the tissue moves. By scanning the tissue with ultrasound, the data to calculate the displacements as a function of time is acquired. Using a correlation or similar measure, the displacements represented by the scans acquired at different times are determined. The maximum displacement and/or the phase shift between displacement profiles indicates the time at which the shear wave occurs at the location or between locations. The time and the distance from the origin of the shear wave are used to solve for the speed of the shear wave at that location. Any shear wave imaging known now or later developed can be used.
For an ARFI, a beamformer produces electrical signals for a focused transmission and a transducer transforms the electrical signals into acoustic signals to transmit the push pulse from the transducer. The acoustic excitation is transmitted into the patient. The acoustic excitation acts as a pulse excitation to cause displacement. For example, a 400-cycle transmit waveform, having power levels or peak amplitude similar to or less than B-mode transmissions for imaging the tissue, is transmitted as an acoustic beam. Any ARFI or shear wave imaging sequence can be used. Other sources of stress can be used, such as a mechanical impact or vibration source. The pulse excitation produces a shear wave at the spatial location.
The tissue locations are tracked. The ultrasound system, such as an image processor of the system, tracks the motion in response to the push pulse. For each of a plurality of locations, the displacement caused by the propagating shear wave is tracked. Tracking is axial (that is, one-dimensional displacements are tracked along the scan line), but two-dimensional or three-dimensional tracking could be used. The motion of the tissue is found for each location, for any number of time samples over a period during which the wave is expected to propagate past the location. By tracking at multiple locations, profiles of tissue displacement as a function of time are obtained for the different locations.
To track, a transducer and a receive beamformer acquire echo data at different times to determine the displacement of the tissue. The displacement is detected by ultrasound scanning. For example, B-mode scans are performed to detect the motion of the tissue. During a given time, ultrasound is transmitted toward the tissue or toward a region of interest. For example, pulses lasting 1 to 5 cycles with an intensity of less than 720 mW/cm² are used. Pulses having other intensities can be used. Scanning is performed for any number of scan lines. For example, eight or sixteen receive beams, distributed along two dimensions, are formed in response to each transmission. After or while applying the stress, B-mode transmissions are performed repeatedly along a single transmit scan line and receptions are performed along adjacent receive scan lines.
The B-mode intensity can vary due to the displacement of the tissue as a function of time. For the monitored scan lines, a sequence of data is provided representing a time profile of tissue motion resulting from the stress. By performing transmission and reception multiple times, data representing the region at different times is received. By repeatedly scanning with ultrasound, the position of the tissue at different times is determined.
The displacement is detected for each of multiple spatial locations. For example, the velocity, the variance, a shift in the intensity pattern (for example, speckle tracking) or other information is detected from the received data as the displacement between two times. A sequence of motion can be detected for each of the locations.
In one embodiment using the B-mode data, the data from different scans is axially correlated as a function of time. For each depth or spatial sampling position, a correlation is performed over a plurality of depths or spatial sampling positions (for example, a kernel of 64 depths, with the central depth being the point at which the profile is calculated). For example, a current data set is correlated multiple times with a reference data set. The location of a data subset, centered at a given location in the reference set, is identified in the current set. Different relative translations between the two data sets are tested.
The level of similarity or correlation of the data is calculated at each of the different offset positions. The translation with the highest correlation represents the displacement or offset for the time associated with the current data relative to the reference.
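A minimal sketch of this axial correlation tracking along one scan line; the 64-sample kernel matches the example above, while the search range is an assumed value:

```python
import numpy as np

def axial_displacement(ref_line, cur_line, kernel=64, max_shift=8):
    """Track axial displacement along one scan line.

    ref_line, cur_line: 1-D arrays of RF (or detected) samples for the
    reference and current times. For each depth, a kernel centered on
    the reference sample is compared with shifted positions in the
    current data; the shift with the highest normalized correlation is
    the displacement in samples.
    """
    half = kernel // 2
    n = len(ref_line)
    disp = np.zeros(n)
    for i in range(half + max_shift, n - half - max_shift):
        ref_k = ref_line[i - half:i + half]
        best_corr, best_shift = -np.inf, 0
        for s in range(-max_shift, max_shift + 1):
            cur_k = cur_line[i - half + s:i + half + s]
            num = np.dot(ref_k, cur_k)
            den = np.linalg.norm(ref_k) * np.linalg.norm(cur_k) + 1e-12
            corr = num / den
            if corr > best_corr:
                best_corr, best_shift = corr, s
        disp[i] = best_shift
    return disp
```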
The displacements as a function of time and/or space are used for the calculation. In one embodiment, the displacements are combined for different depths, leaving displacements spaced in azimuth. For example, the displacements are averaged over depth for a given scan line or lateral location. As an alternative to averaging, a maximum or another selection criterion is used to determine the displacement for a given lateral location. Displacements for only one depth can be used. Displacements for different depths can be used independently.
A separate shear wave speed can be estimated for each location, such as a function of the lateral distance to an origin of the shear wave. The speed of the shear wave is based on the displacement as a function of time and/or location. The value of the shear wave speed for each location is estimated from the displacement profile or profiles. To estimate the value in one embodiment, the peak or maximum amplitude of the displacement profile is determined. Based on a distance from the location (for example, sub-region center, start point or end point) to the source of the stress (for example, focal position of the ARFI or origin of the shear wave), the time difference between the application of the stress and the peak amplitude indicates a speed. In a variant, the displacement profiles of different locations are correlated to find a delay or phase difference between the locations. This phase shift can be used to calculate the speed between the locations associated with the correlated profiles. In other embodiments, analytic data is calculated from the displacement profile and a phase shift is used to determine the elasticity. A phase difference as a function of travel time between different sub-regions, or a zero crossing of the phase for a given sub-region, indicates a speed. In yet another embodiment, the displacement as a function of location for a given time indicates a maximum displacement location. The distance from the origin of the shear wave to this location and the time give the speed. This is repeated for other times, in order to find the maximum speed at each location.
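A minimal sketch of the time-to-peak variant described above, where the speed at each lateral location is distance divided by time-to-peak (v = dx/dt); the tracking rate and synthetic data are assumptions for illustration:

```python
import numpy as np

def shear_wave_speed(displacements, lateral_mm, prf_hz):
    """Estimate shear wave speed per lateral location.

    displacements: array (num_times, num_lateral_locations) of tracked
                   displacement, starting at the push (ARFI) time.
    lateral_mm: distance of each location from the shear wave origin.
    prf_hz: tracking pulse repetition frequency.
    """
    t_peak_s = np.argmax(displacements, axis=0) / prf_hz
    with np.errstate(divide="ignore", invalid="ignore"):
        speed_m_s = (np.asarray(lateral_mm) * 1e-3) / t_peak_s
    return speed_m_s

# Example: 3 locations at 2, 4 and 6 mm, tracked at 5 kHz.
d = np.zeros((50, 3))
d[10, 0], d[20, 1], d[30, 2] = 1.0, 1.0, 1.0      # synthetic peaks
print(shear_wave_speed(d, [2.0, 4.0, 6.0], 5000.0))  # about 1 m/s each
```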
The ultrasound scanner can automatically trigger the scanning for quantitative imaging. To avoid motion artifact, timing relative to the respiratory and/or cardiac cycle is used. A breathing or ECG sensor provides the cycle information. The scanner uses the cycle information to select scan times at which the tissue is subject to less motion. Alternatively, the user asks the patient to hold their breath and manually initiates the quantitative scanning.
Other characteristics of the quantitative scan can be controlled on the basis of the position of the ROI, its size, its shape and/or its orientation. For example, the focal position is controlled so that it is at a center depth of the ROI for the push or tracking transmit pulse. The depth of the ROI can be used to select the push and/or tracking frequency. The F-number is selected to provide a uniform push pulse or uniform ARFI stress in the ROI (for example, a long or short focal region).
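A hedged sketch of deriving push-pulse settings from the localized ROI, using the common relation F# = focal depth / aperture; the probe width, F-number and ROI depths are assumed illustration values:

```python
def push_pulse_settings(roi_top_mm, roi_bottom_mm, f_number=2.0,
                        probe_width_mm=38.0):
    """Derive push/tracking transmit settings from the localized ROI.

    Focal depth is placed at the ROI center depth; the active aperture
    follows from F# = focal depth / aperture, clamped to the probe width.
    """
    focal_depth_mm = 0.5 * (roi_top_mm + roi_bottom_mm)
    aperture_mm = min(focal_depth_mm / f_number, probe_width_mm)
    return {"focal_depth_mm": focal_depth_mm, "aperture_mm": aperture_mm}

# Example: ROI spanning 40-50 mm depth.
print(push_pulse_settings(40.0, 50.0))
# {'focal_depth_mm': 45.0, 'aperture_mm': 22.5}
```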
In act 18, the ultrasound scanner produces an image for shear wave imaging. The produced image is displayed on a display device. The image processor, a renderer or another device produces an image from the quantitative imaging for the ROI or ROIs.
The image includes one or more quantities representing characteristics of the tissue. An alphanumeric or graphical representation of one or more quantities can be provided, such as a shear wave speed Vs for the ROI, superimposed as an annotation on a B-mode image (see Figure 2). Alternatively or additionally, the quantities are displayed for different locations. For example, the quantities for different locations in the ROI modulate the brightness and/or the color, so that a spatial representation of the quantity is provided in the image. The spatial representation can be superimposed on or included in a B-mode or other image. The quantity or quantities can be provided without another type of imaging or can be added to or superimposed on other types of ultrasound imaging.
The ROI 24 can be presented on an image before, after or during quantitative imaging. For example, the ROI 24 is shown as a graphic on a B-mode image for the user to verify correct placement and/or view the placement.
The image may include annotations or representations of relative measurements. For example, liver fat is indicated by an elasticity ratio between the liver and the kidney. The ROIs 24 may or may not be represented in the image.
Ultrasound imaging is used for diagnosis and treatment. Improved, more consistent and/or more accurate quantitative imaging, due to correct placement of the ROI, allows better diagnosis and/or treatment by a physician. The physician and the patient benefit from the improvement, since the quantification output is likely more accurate.
FIG. 3 represents an embodiment of the system 30 for placing an ROI in quantitative ultrasound imaging. The user configures the system 30 for quantitative imaging, such as by selecting an application for shear wave speed imaging in the liver. The user can change values of one or more presets, as desired. Once scanning begins and the FOV is positioned as desired, the system 30 automatically detects the landmarks, performs signal processing for clutter or attenuation and/or identifies fluid locations. The ROI is placed for quantification and the system 30 produces an image or images showing tissue quantifications at the ROI.
The system 30 is an ultrasound imaging device or scanner. In one embodiment, the ultrasound scanner is a medical diagnostic ultrasound imaging system. In other embodiments, the ultrasound imaging device is a personal computer, a workstation, a PACS station or another arrangement at the same location or distributed over a network, for real-time imaging or as an acquisition station.
The system 30 implements the method of Figure 1 or other methods. The system 30 includes a transmit beamformer 31, a transducer 32, a receive beamformer 33, an image processor 34, a display 35, a beamformer control unit 36 and a memory 37. Different, additional or fewer elements can be provided. For example, a spatial filter, a scan converter, a processor for adjusting dynamic range and/or an amplifier for applying gain can be provided. As another example, a user input can be provided.
The transmit beamformer 31 is an ultrasound transmitter, a memory, a pulser, an analog circuit, a digital circuit or combinations thereof. The transmit beamformer 31 is configured to produce waveforms for a plurality of channels, with different or relative amplitudes, delays and/or phasing, in order to focus a resulting beam at one or more depths. The waveforms are produced and applied to a transducer array at any pulse repetition rate or frequency. For example, the transmit beamformer 31 produces a sequence of pulses for different lateral and/or range regions. The pulses have a center frequency.
The transmit beamformer 31 connects to the transducer 32, such as through a transmit/receive switch. Upon transmission of acoustic waves by the transducer 32 in response to the generated waveforms, one or more beams are formed during a given transmit event. The beams are for imaging in B mode, in quantitative mode (for example, ARFI or shear wave imaging) or in another mode. Sector, Vector (trademark), linear or other scan formats can be used. The same region is scanned several times to produce a sequence of images or for quantification. The formed beams have an aperture, an origin on the transducer 32 and an angle relative to the transducer 32. The beams in the FOV have a desired line density and format.
The transducer 32 is a 1-dimensional, 1.25-dimensional, 1.5-dimensional, 1.75-dimensional or two-dimensional array of piezoelectric or capacitive membrane elements.
The transducer 32 includes a plurality of elements for transducing between acoustic and electrical energies. For example, the transducer 32 is a one-dimensional PZT array with about 64 to 256 elements. As other examples, the transducer 32 is a transesophageal echocardiography (TEE) array, an intracardiac echocardiography (ICE) array or a transthoracic echo (TTE) array.
The transducer 32 can be removably connected to the transmit beamformer 31, for transforming electrical waveforms into acoustic waveforms, and to the receive beamformer 33, for transforming acoustic echoes into electrical signals. The transducer 32 includes a plug, which can be plugged into an imaging system or which communicates wirelessly with the imaging system. The transducer 32 transmits the transmit beam, where the waveforms have a frequency and are focused at a tissue region or location of interest in the patient. The acoustic waveforms are produced in response to applying the electrical waveforms to the transducer elements. The transducer 32 transmits acoustic energy and receives echoes. The receive signals are produced in response to ultrasound energy (echoes) impinging on the elements of the transducer 32.
The receive beamformer 33 includes a plurality of channels with amplifiers, delays and/or phase rotators, and one or more summers. Each channel is connected to one or more elements of the transducer. The receive beamformer 33 applies relative delays, phases and/or summation to form one or more receive beams in response to each transmission for detection. Dynamic focusing on receive may be provided. The receive beamformer 33 outputs data representing spatial locations, using the received acoustic signals. Relative delays and/or phase shifts and summation of signals from different elements provide beamforming. Sampling by the receive beamformer 33 is provided over a range of depths. Timing is used to select the depth range over which sampling occurs. The receive beams have a desired scan line density at an orientation or orientations using an aperture.
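As a hedged sketch of the delay-and-sum operation such a receive beamformer performs for a single focal point, assuming a simple linear-array geometry, an on-axis transmit path and the usual 1540 m/s sound speed:

```python
import numpy as np

def delay_and_sum(channel_rf, elem_x_m, focus_m, fs_hz, c=1540.0):
    """Single-point delay-and-sum receive beamforming.

    channel_rf: array (num_elements, num_samples) of per-channel RF data.
    elem_x_m: lateral element positions (m); focus_m: (x, z) focal point.
    Delays align the round-trip path from the focus to each element.
    """
    fx, fz = focus_m
    # Receive path length from the focal point to each element.
    dist = np.sqrt((elem_x_m - fx) ** 2 + fz ** 2)
    delays_s = (fz + dist) / c             # transmit (on-axis) + receive
    idx = np.round(delays_s * fs_hz).astype(int)
    n_elem, n_samp = channel_rf.shape
    idx = np.clip(idx, 0, n_samp - 1)
    return channel_rf[np.arange(n_elem), idx].sum()
```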
The receive beamformer 33 may include a filter, such as a filter for isolating information at a second harmonic or at another frequency band relative to the transmit frequency band. Such information may more likely include desired tissue, contrast agent and/or flow information. In another embodiment, the receive beamformer 33 includes a memory or buffer and a filter or adder. Two or more receive beams are combined to isolate information at a desired frequency band, such as a second harmonic, a cubic fundamental or another band. Alternatively, the fundamental frequency band can be used.
For shear wave imaging or ARFI, information received in parallel is used. For tracking transmissions, a transmit beam covering the ROI is transmitted. Two or more receive beams are formed (for example, 8, 16, 32 or 64 receive beams distributed uniformly or non-uniformly in the ROI) in response to each transmit beam.
The receive beamformer 33 outputs beam summed data representing spatial locations. The beam summed data is in I/Q or RF format. Ultrasound signals are output.
The beamformer control unit 36 and/or another processor configures the beamformers 31, 33. The beamformer control unit 36 is a processor, an application-specific integrated circuit, a field programmable gate array, a digital circuit, an analog circuit, a memory, a buffer, combinations thereof or another device for configuring the transmit and receive beamformers 31, 33. The beamformer control unit 36 can use the memory 37 to acquire and/or buffer values of the different beamformer parameters. Any control structure or format can be used to establish the imaging sequence for quantitative imaging, including a B-mode scan before and/or interleaved with the quantitative scan. The beamformers 31, 33 are caused to acquire data for quantitative imaging.
The image processor 34 detects, such as by detecting an intensity, from the beamformed samples. Any detection can be used, such as B-mode detection and/or color flow detection. In one embodiment, a B-mode detector is a general processor, an application-specific integrated circuit or a field programmable gate array. Logarithmic compression can be provided by the B-mode detector, so that the dynamic range of the B-mode data corresponds to the dynamic range of the display. The image processor 34 may or may not include a scan converter.
The image processor 34 includes a control unit, a general processor, an application-specific integrated circuit, a field programmable gate array, a graphics processing unit or another processor for locating an ROI and performing quantitative ultrasound imaging based on the ROI. The image processor 34 includes or interacts with the beamformer control unit 36 to scan the ROI in the quantitative scan. The image processor 34 is configured in hardware, software and/or firmware.
The image processor 34 can be configured to locate an ROI in a B-mode FOV on the basis of data detected from the B-mode scan. The ROI is located on the basis of one or more anatomical landmarks represented in the data from the B-mode scan. Other scan modes can be used. Detection is carried out by application of a machine-learned network and/or by image processing. The landmark can be used to guide the placement of the ROI away from the landmark, relative to the landmark and/or on the landmark.
The image processor 34 can be configured to identify fluid locations or cysts. Doppler estimates produced from the beamformed data indicate locations of fluid or cysts. Alternatively, tissue bordering fluid regions is located in the B-mode data to identify the fluid regions.
The image processor 34 is configured to locate the ROI in the B-mode FOV on the basis of attenuation or clutter. The I/Q or RF ultrasonic signals output by the receive beamformer are signal processed to measure clutter and/or attenuation.
The image processor 34 uses one or more of the landmark locations, attenuation, clutter and/or fluid regions to determine the location of the ROI or ROIs. Any rule can be used (for example, an algorithm or a machine-learned network). The position, size, shape and orientation of the ROI or ROIs are determined.
The image processor 34 is configured to cause the transmit and receive beamformers 31, 33 to scan in the quantitative mode for the localized ROI. Based on the position, size, shape and orientation of the ROI or ROIs, the ROI or ROIs are scanned for the quantitative imaging mode. An image is produced from the scan in quantitative mode, such as a shear wave speed image.
The display 35 is a CRT, LCD, monitor, plasma display, projector, printer or other device for displaying an image or a sequence of images. Any display 35 known now or later developed can be used. The display 35 displays a B-mode image, an image in quantitative mode (for example, an annotation or color modulation on a B-mode image) or other images. The display 35 displays one or more images representing the ROI or tissue characteristics in the ROI.
The beamformer control unit 36, the image processor 34 and/or the ultrasound system 30 operate according to instructions stored in the memory 37 or in another memory. The instructions configure the system to perform the acts of FIG. 1. The instructions configure operation by being loaded into a control unit, by causing a table of values to be loaded (for example, an elasticity imaging sequence) and/or by being executed. The memory 37 is a non-transitory computer-readable storage medium. The instructions for implementing the operations, methods and/or techniques described herein are provided on computer-readable media or memories, such as a cache, buffer, RAM, removable medium, hard disk or other computer-readable storage medium. Computer-readable storage media include various types of volatile and non-volatile storage media. The functions, acts or tasks illustrated in the figures or described herein are performed in response to one or more sets of instructions stored in or on computer-readable storage media. The functions, acts or tasks are independent of the particular type of instruction set, storage medium, processor or processing strategy and can be performed by software, hardware, integrated circuits, firmware, microcode and the like, operating alone or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing and the like. In one embodiment, the instructions are stored on a removable media device for reading by local or remote systems. In other embodiments, the instructions are stored at a remote location for transfer over a computer network or over telephone lines. In still other embodiments, the instructions are stored in a given computer, CPU, GPU or system.
Although the invention has been described above, with reference to various embodiments, it goes without saying that there can be many changes and modifications without departing from the scope of the invention. The preceding detailed description should therefore be considered as illustrative rather than limiting.
Claims:
[Claim 1] Method for placing a region of interest (ROI) in quantitative ultrasound imaging, using an ultrasound scanner (30), characterized in that:
an anatomical landmark is detected (10) in an ultrasound image;
signal processing (11) of (a) in-phase and quadrature or (b) radio-frequency ultrasonic signals is carried out;
a position of an ROI in a field of view of the ultrasound image is determined (14) by the ultrasound scanner (30), the position of the ROI being determined on the basis of the anatomical landmark and of the signal processing (11);
shear wave imaging is performed (16) by the ultrasound scanner (30) at the ROI position and an image for shear wave imaging is produced (18).
[Claim 2] Method according to claim 1, characterized in that the detection (10) comprises detecting (10) by a machine-learned network or detecting (10) by image processing, and in which determining (14) the position comprises determining (14) by a machine-learned network.
[Claim 3] Method according to claim 1 or 2, characterized in that the detection (10) comprises detecting (10) a liver capsule and in which the determination (14) comprises determining (14) the position of the ROI on the basis of a location of the liver capsule.
[Claim 4] Method according to one of claims 1 to 3, characterized in that the signal processing (11) comprises measuring clutter in the ultrasonic signals and in which the determination (14) comprises determining (14) the position at a distance from the locations of the signals having clutter.
[Claim 5] Method according to one of claims 1 to 4, characterized in that the signal processing (11) comprises measuring an attenuation in the ultrasonic signals and in which the determination (14) comprises determining (14) a depth of the position of the ROI on the basis of the attenuation.
[Claim 6] Method according to one of the preceding claims, further characterized in that it comprises the determination (14), by the ultrasound scanner (30), of a size and a shape of the ROI at the position.
[Claim 7] Method according to one of the preceding claims, further characterized in that it comprises identifying (12), by the ultrasound scanner (30), locations of fluid, and wherein determining (14) the position includes determining (14) the position of the ROI so as not to include the locations of fluid.
[Claim 8] Method according to any one of the preceding claims, characterized in that the determination (14) of the position comprises determining (14), by the ultrasound scanner (30), the position of the ROI and another position of another ROI, and wherein producing (18) the image includes producing (18) the image annotated with a relative measure between the ROI and the other ROI.
[Claim 9] Method for placing a region of interest (ROI) in quantitative ultrasound imaging, by an ultrasound scanner (30), characterized in that:
a location of a liver capsule is detected (10), by the ultrasound scanner (30), in an ultrasound image;
a position of an ROI in a field of view of the ultrasound image is determined (14) by the ultrasound scanner (30), the position of the ROI being determined on the basis of the location of the liver capsule;
shear wave imaging is performed (16) by the ultrasound scanner (30) at the ROI position and an image for the shear wave imaging is produced (18).

[Claim 10] Method according to claim 9, characterized in that determining (14) the position comprises determining (14) the position at a minimum depth from the location of the liver capsule along a line which is perpendicular to an edge of the liver capsule.

[Claim 11] Method according to claim 9 or 10, characterized in that it further comprises signal processing (11) of ultrasonic signals (a) in phase and in quadrature or (b) of radio frequency, the signal processing (11) measuring clutter or attenuation, and in which determining (14) the position comprises determining (14) the position on the basis of a location of the clutter or attenuation.
[Claim 12] System for placing a region of interest (ROI) in quantitative ultrasound imaging, characterized in that it comprises:
transmit and receive beamformers (31, 33), connected to a transducer, configured to scan by ultrasound in a B-mode and in a quantitative mode;
an image processor (34), configured to locate an ROI in a B-mode field of view on the basis of the B-mode scan data, to cause the transmit and receive beamformers (31, 33) to scan in the quantitative mode for the located ROI, and to generate an image from the quantitative mode scan;
a display (35) configured to display the image of the quantitative mode scan.
[Claim 13] System according to claim 12, characterized in that the quantitative mode comprises acoustic radiation force imaging and in that the image processor (34) is configured to locate the ROI on the basis of an anatomical landmark represented in the B-mode scan data.
[Claim 14] System according to claim 12 or 13, characterized in that the image processor (34) is configured to locate the ROI on the basis of attenuation or clutter determined from ultrasonic signals (a) in phase and in quadrature or (b) of radio frequency and on the basis of the B-mode scan data after B-mode detection.
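By way of illustration only, and not as the patented implementation, the following Python sketch shows one way the placement logic recited in claims 1, 4, 5, 7, 9 and 10 above could fit together. All inputs and thresholds are hypothetical assumptions: the detected capsule depth, the per-depth attenuation estimate, and the boolean masks marking clutter and fluid would come from the detection and signal-processing acts (10, 11, 12).

import numpy as np


def place_roi(capsule_depth_mm: float,
              attenuation_db_per_cm: np.ndarray,
              depths_mm: np.ndarray,
              clutter_mask: np.ndarray,
              fluid_mask: np.ndarray,
              min_offset_mm: float = 10.0,
              max_attenuation_db_per_cm: float = 1.5) -> float:
    """Return a candidate ROI centre depth in mm (illustrative logic only)."""
    # Start at a minimum depth below the detected liver capsule (cf. claim 10).
    deep_enough = depths_mm >= capsule_depth_mm + min_offset_mm
    # Keep depths where the signal is not dominated by clutter or fluid
    # (cf. claims 4 and 7) and where the attenuation is acceptable (cf. claim 5).
    acceptable = (deep_enough
                  & ~clutter_mask
                  & ~fluid_mask
                  & (attenuation_db_per_cm <= max_attenuation_db_per_cm))
    candidates = depths_mm[acceptable]
    if candidates.size == 0:
        raise ValueError("no suitable ROI depth found")
    # Choose the shallowest acceptable depth so the ROI stays near the capsule.
    return float(candidates.min())


if __name__ == "__main__":
    depths = np.linspace(0.0, 80.0, 161)        # depth axis, mm (assumed)
    atten = np.full_like(depths, 0.7)           # flat attenuation estimate (assumed)
    clutter = depths < 25.0                     # near-field clutter region (assumed)
    fluid = (depths > 55.0) & (depths < 60.0)   # a vessel cross-section (assumed)
    print(place_roi(30.0, atten, depths, clutter, fluid))

Run as a script, the example places the ROI at the shallowest acceptable depth below the assumed capsule location; an actual scanner would derive the masks and attenuation estimate from the in-phase and quadrature or radio-frequency data.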
Similar technologies:
Publication number | Publication date | Patent title
FR3005563B1|2019-10-11|CLEAR MEASUREMENTS OF TISSUE PROPERTIES IN ULTRASONIC MEDICAL IMAGING
FR3078250A1|2019-08-30|Placement of a region of interest for quantitative ultrasound imaging
CN104582582B|2017-12-15|Ultrasonic image-forming system memory architecture
US10835210B2|2020-11-17|Three-dimensional volume of interest in ultrasound imaging
US8483488B2|2013-07-09|Method and system for stabilizing a series of intravascular ultrasound images and extracting vessel lumen from the images
FR2934054A1|2010-01-22|SHEAR WAVE IMAGING
JP2006507883A|2006-03-09|A segmentation tool to identify flow areas within an imaging system
US8801614B2|2014-08-12|On-axis shear wave characterization with ultrasound
US8425422B2|2013-04-23|Adaptive volume rendering for ultrasound color flow diagnostic imaging
WO2015087218A1|2015-06-18|Imaging view steering using model-based segmentation
US9261485B2|2016-02-16|Providing color doppler image based on qualification curve information in ultrasound system
CN107212903A|2017-09-29|Relative backscattering coefficient in medical diagnostic ultrasound
FR3047405A1|2017-08-11|
US8696577B2|2014-04-15|Tongue imaging in medical diagnostic ultrasound
US8545411B2|2013-10-01|Ultrasound system and method for adaptively performing clutter filtering
FR2986960A1|2013-08-23|METHOD AND SYSTEM FOR VISUALIZATION OF ASSOCIATED INFORMATION IN ULTRASONIC SHEAR WAVE IMAGING AND COMPUTER-READABLE STORAGE MEDIUM
WO2017013443A1|2017-01-26|A method of, and apparatus for, determination of position in ultrasound imaging
FR3072870A1|2019-05-03|VISCOELASTIC ESTIMATION OF A FABRIC FROM SHEAR SPEED IN ULTRASONIC MEDICAL IMAGING
CN106170254B|2019-03-26|Ultrasound observation apparatus
CN108078626A|2018-05-29|Enhance the detection of needle and visualization method and system in ultrasound data by performing shearing wave elastogram
JP2021530334A|2021-11-11|Systems and methods for guiding the acquisition of ultrasound images
US20190261952A1|2019-08-29|Optimization in ultrasound color flow imaging
US20190298314A1|2019-10-03|Ultrasonic diagnostic device, medical image processing device, and medical image processing method
US20210128108A1|2021-05-06|Loosely coupled probe position and view in ultrasound imaging
US20210161510A1|2021-06-03|Ultrasonic diagnostic apparatus, medical imaging apparatus, training device, ultrasonic image display method, and storage medium
Patent family:
Publication number | Publication date
KR20190103048A|2019-09-04|
US11006926B2|2021-05-18|
KR102223048B1|2021-03-03|
DE102019202545A1|2019-08-29|
CN110192893A|2019-09-03|
US20190261949A1|2019-08-29|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title

JPS64725B2|1979-12-29|1989-01-09|Fujitsu Ltd|
US6193660B1|1999-03-31|2001-02-27|Acuson Corporation|Medical diagnostic ultrasound system and method for region of interest determination|
US7050610B2|2001-04-04|2006-05-23|Siemens Medical Solutions Usa, Inc.|Method and system for improving the spatial resolution for strain imaging|
US20050033123A1|2003-07-25|2005-02-10|Siemens Medical Solutions Usa, Inc.|Region of interest methods and systems for ultrasound imaging|
US7090640B2|2003-11-12|2006-08-15|Q-Vision|System and method for automatic determination of a region of interest within an image|
US8471866B2|2006-05-05|2013-06-25|General Electric Company|User interface and method for identifying related information displayed in an ultrasound system|
US8469890B2|2009-03-24|2013-06-25|General Electric Company|System and method for compensating for motion when displaying ultrasound motion tracking information|
US10338203B2|2011-09-09|2019-07-02|Siemens Medical Solutions Usa, Inc.|Classification preprocessing in medical ultrasound shear wave imaging|
WO2013170053A1|2012-05-09|2013-11-14|The Regents Of The University Of Michigan|Linear magnetic drive transducer for ultrasound imaging|
US9084576B2|2012-07-13|2015-07-21|Siemens Medical Solutions Usa, Inc.|Automatic doppler gate positioning in spectral doppler ultrasound imaging|
JP5730979B2|2013-11-08|2015-06-10|日立アロカメディカル株式会社|Ultrasonic diagnostic apparatus and elasticity evaluation method|
US10390796B2|2013-12-04|2019-08-27|Siemens Medical Solutions Usa, Inc.|Motion correction in three-dimensional elasticity ultrasound imaging|
EP3120323B1|2014-03-21|2020-05-20|Koninklijke Philips N.V.|Image processing apparatus and method for segmenting a region of interest|
US10297027B2|2014-06-09|2019-05-21|Siemens Healthcare Gmbh|Landmark detection with spatial and temporal constraints in medical imaging|
US10835210B2|2015-03-30|2020-11-17|Siemens Medical Solutions Usa, Inc.|Three-dimensional volume of interest in ultrasound imaging|
US9972093B2|2015-03-30|2018-05-15|Siemens Healthcare Gmbh|Automated region of interest detection using machine learning and extended Hough transform|
JP6216736B2|2015-04-08|2017-10-18|株式会社日立製作所|Ultrasonic diagnostic apparatus and ultrasonic diagnostic method|
JP6651316B2|2015-09-16|2020-02-19|キヤノンメディカルシステムズ株式会社|Ultrasound diagnostic equipment|
JP2020018694A|2018-08-02|2020-02-06|株式会社日立製作所|Ultrasonic diagnostic device and ultrasonic image processing method|
US10588605B2|2015-10-27|2020-03-17|General Electric Company|Methods and systems for segmenting a structure in medical images|
DE102019132514B3|2019-11-29|2021-02-04|Carl Zeiss Meditec Ag|Optical observation device and method and data processing system for determining information for distinguishing between tissue fluid cells and tissue cells|
KR102353842B1|2020-04-03|2022-01-25|고려대학교 산학협력단|Method and Apparatus for Automatic Recognition of Region of Interest and Automatic Fixation of Doppler Image Extraction Region Based on Artificial Intelligence Model|
Legal status:
2020-02-11| PLFP| Fee payment|Year of fee payment: 2 |
2021-02-15| PLFP| Fee payment|Year of fee payment: 3 |
2022-02-23| PLFP| Fee payment|Year of fee payment: 4 |
Priority:
Application number | Filing date | Patent title
US15/907,209|US11006926B2|2018-02-27|2018-02-27|Region of interest placement for quantitative ultrasound imaging|
US15/907,209|2018-02-27|